Razavi Khorasan Province
Efficient Automated Diagnosis of Retinopathy of Prematurity by Customized CNN Models
Saeedi, Farzan, Keshvari, Sanaz, Shoeibi, Nasser
This paper presents an in-depth examination of Retinopathy of Prematurity (ROP) diagnosis using advanced deep learning methods. Our focus centers on refining and evaluating CNN-based approaches for precise and efficient ROP detection. We navigate the complexities of dataset curation, preprocessing strategies, and model architecture, aligning with research objectives that encompass model effectiveness, computational cost analysis, and time complexity assessment. Results underscore the superiority of tailored CNN models over pre-trained counterparts, evident in higher accuracy and F1-scores. Implementation of a voting system further enhances performance. Additionally, our study shows that the proposed customized CNN model can alleviate the computational burdens associated with deep neural networks. Furthermore, we demonstrate the feasibility of deploying these models within dedicated software and hardware configurations, highlighting their utility as diagnostic aids in clinical settings. In summary, this work contributes to ROP diagnosis by demonstrating the efficacy of deep learning models in enhancing diagnostic precision and efficiency.
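The voting scheme described above can be illustrated with plain majority voting over per-model predictions; the three "CNNs" and their labels below are hypothetical stand-ins, not the paper's trained models:

```python
import numpy as np

def majority_vote(predictions):
    """Combine per-model class predictions by majority vote.

    predictions: array of shape (n_models, n_samples) with integer class labels.
    Returns one label per sample; ties resolve to the smallest label.
    """
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # Count votes per class for each sample (one column = one sample)
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, predictions)
    return votes.argmax(axis=0)

# Three hypothetical CNNs classifying 4 fundus images as ROP (1) or healthy (0)
preds = [[1, 0, 1, 0],
         [1, 1, 1, 0],
         [0, 0, 1, 1]]
print(majority_vote(preds))  # → [1 0 1 0]
```

Hard voting like this needs only the predicted labels; soft voting over per-class probabilities is the common alternative when model outputs are calibrated.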
- Asia > Middle East > Oman > Muscat Governorate > Muscat (0.04)
- Asia > Middle East > Iran > Razavi Khorasan Province > Mashhad (0.04)
Artificial Intelligence for CRISPR Guide RNA Design: Explainable Models and Off-Target Safety
Abbaszadeh, Alireza, Shahlai, Armita
The CRISPR-Cas genome editing system has rapidly become an indispensable tool across biotechnology and medicine, enabling targeted DNA modifications with unprecedented ease. A single-guide RNA (sgRNA, or simply gRNA) directs the Cas nuclease (such as Cas9 or Cas12a) to a complementary genomic sequence, where the nuclease induces a double-strand break or nucleotide modification. The efficiency and specificity of this process are largely dictated by the gRNA sequence and its interactions with both the target DNA and the cellular environment. Designing optimal gRNAs is therefore critical for successful editing outcomes. Early gRNA design relied on empirical rules and modest machine learning models, but these approaches often struggled to capture the complex determinants of gRNA activity and off-target effects. In recent years, artificial intelligence (AI) - particularly deep learning - has been leveraged to overcome these limitations, learning predictive features from large-scale CRISPR datasets and outperforming previous rule-based methods in guide efficacy prediction[1, 2]. Deep learning models can ingest not only the gRNA and target DNA sequences but also additional contextual information (e.g.
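As a concrete starting point for such models, guide and target sequences are typically one-hot encoded before being fed to a network; the sketch below shows this standard encoding (the example guide and the A/C/G/T channel ordering are illustrative assumptions):

```python
import numpy as np

BASES = "ACGT"  # channel order is a convention; T stands in for U in the gRNA

def one_hot(seq):
    """One-hot encode a guide RNA / target DNA sequence as (length, 4)."""
    idx = {b: i for i, b in enumerate(BASES)}
    out = np.zeros((len(seq), 4), dtype=np.float32)
    for pos, base in enumerate(seq.upper()):
        out[pos, idx[base]] = 1.0
    return out

# Hypothetical 20-nt spacer followed by an NGG PAM, as used by SpCas9
grna = "GACGCATAAAGATGAGACGCTGG"
x = one_hot(grna)
print(x.shape)  # (23, 4)
```

A 1D convolutional or transformer model then consumes this matrix, optionally concatenated with contextual features such as chromatin accessibility.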
- Asia > China (0.14)
- North America > United States (0.14)
- Asia > Middle East > Iran > Razavi Khorasan Province > Mashhad (0.04)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.93)
From Prediction to Simulation: AlphaFold 3 as a Differentiable Framework for Structural Biology
Abbaszadeh, Alireza, Shahlaee, Armita
AlphaFold 3 represents a transformative advancement in computational biology, enhancing protein structure prediction through novel multi-scale transformer architectures, biologically informed cross-attention mechanisms, and geometry-aware optimization strategies. These innovations dramatically improve predictive accuracy and generalization across diverse protein families, surpassing previous methods. Crucially, AlphaFold 3 embodies a paradigm shift toward differentiable simulation, bridging traditional static structural modeling with dynamic molecular simulations. By reframing protein folding predictions as a differentiable process, AlphaFold 3 serves as a foundational framework for integrating deep learning with physics-based molecular
Improving OCR using internal document redundancy
Belzarena, Diego, Mowlavi, Seginus, Artola, Aitor, Mariño, Camilo, Gardella, Marina, Ramírez, Ignacio, Tadros, Antoine, He, Roy, Bottaioli, Natalia, Rajaei, Boshra, Randall, Gregory, Morel, Jean-Michel
Current OCR systems are based on deep learning models trained on large amounts of data. Although they have shown some ability to generalize to unseen data, especially in detection tasks, they can struggle to recognize low-quality data. This is particularly evident for printed documents, where intra-domain data variability is typically low but inter-domain data variability is high. In that context, current OCR methods do not fully exploit each document's redundancy. We propose an unsupervised method that leverages the redundancy of character shapes within a document to correct imperfect outputs of a given OCR system and to suggest better clustering. To this end, we introduce an extended Gaussian Mixture Model (GMM), alternating an Expectation-Maximization (EM) algorithm with an intra-cluster realignment process and statistical normality testing. We demonstrate improvements on documents with various levels of degradation, including recovered Uruguayan military archives and 17th to mid-20th century European newspapers.
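A minimal sketch of the alternation idea, assuming single-character glyph images and integer horizontal shifts only (the paper's extended GMM with EM and normality testing is considerably richer):

```python
import numpy as np

def best_shift(glyph, template, max_shift=2):
    """Integer horizontal shift of `glyph` that best correlates with `template`."""
    shifts = list(range(-max_shift, max_shift + 1))
    scores = [np.sum(np.roll(glyph, s, axis=1) * template) for s in shifts]
    return shifts[int(np.argmax(scores))]

def cluster_glyphs(glyphs, n_iter=5):
    """Toy alternation of realignment and mean update for one cluster of
    same-character glyphs: align each member to the current template,
    then recompute the template from the aligned members (an M-step)."""
    template = glyphs.mean(axis=0)
    for _ in range(n_iter):
        aligned = np.stack([np.roll(g, best_shift(g, template), axis=1)
                            for g in glyphs])
        template = aligned.mean(axis=0)
    return template, aligned

g1 = np.zeros((4, 4)); g1[:, 1] = 1.0   # a vertical stroke
g2 = np.roll(g1, 1, axis=1)             # the same glyph, shifted one pixel
template, aligned = cluster_glyphs(np.stack([g1, g2]))
print(np.allclose(aligned[0], aligned[1]))  # True
```

After realignment the cluster mean is a sharp consensus glyph rather than a blur of misregistered copies, which is what makes the per-document statistics usable for correction.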
- South America > Uruguay (0.14)
- Europe > France > Île-de-France > Paris > Paris (0.04)
- Asia > Middle East > Iran > Razavi Khorasan Province > Mashhad (0.04)
- Asia > China > Hong Kong > Kowloon (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Vision > Optical Character Recognition (0.87)
Deep Positive-Negative Prototypes for Adversarially Robust Discriminative Prototypical Learning
Sabzevar, Ramin Zarei, Mohammadzadeh, Hamed, Tavakoli, Tahmineh, Harati, Ahad
Despite the advantages of discriminative prototype-based methods, their role in adversarial robustness remains underexplored. Meanwhile, current adversarial training methods predominantly focus on robustness against adversarial attacks without explicitly leveraging geometric structures in the latent space, usually resulting in reduced accuracy on the original clean data. We propose a novel framework named Adversarially trained Deep Positive-Negative Prototypes (Adv-DPNP), which integrates discriminative prototype-based learning with adversarial training. Adv-DPNP uses unified class prototypes that serve as both classifier weights and robust anchors in the latent space. Moreover, a novel dual-branch training mechanism maintains stable prototypes by updating them exclusively with clean data, while the feature extractor is trained on both clean and adversarial inputs to increase invariance to adversarial perturbations. In addition, we use a composite loss that combines positive-prototype alignment, negative-prototype repulsion, and consistency regularization to further enhance discrimination, adversarial robustness, and clean accuracy. Extensive experiments on standard benchmarks (CIFAR-10/100 and SVHN) confirm that Adv-DPNP improves clean accuracy over state-of-the-art defenses and baseline methods, while maintaining competitive or superior robustness under a suite of widely used attacks, including FGSM, PGD, C&W, and AutoAttack. We also evaluate robustness to common corruptions on CIFAR-10-C, where Adv-DPNP achieves the highest average accuracy across severities and corruption types. Additionally, we provide an in-depth analysis of the discriminative quality of the learned feature representations, highlighting the effectiveness of Adv-DPNP in maintaining compactness and clear separation in the latent space.
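A hedged sketch of such a composite loss in NumPy; the distance measures, temperature, and weighting below are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def dpnp_loss(z, z_adv, protos, label, tau=1.0, lam=1.0):
    """Illustrative composite prototype loss: pull the clean feature toward
    its class prototype, push it away from the other (negative) prototypes,
    and keep clean/adversarial features consistent.

    z, z_adv : feature vectors for a clean input and its adversarial version
    protos   : (n_classes, dim) array of class prototypes
    label    : ground-truth class index
    """
    pos = np.sum((z - protos[label]) ** 2)                  # positive alignment
    neg_d = np.sum((z[None, :] - protos) ** 2, axis=1)      # distance to every prototype
    neg = np.mean(np.exp(-np.delete(neg_d, label) / tau))   # negative repulsion
    cons = np.sum((z - z_adv) ** 2)                         # consistency regularization
    return pos + neg + lam * cons

# Two classes with orthogonal prototypes: a well-placed feature scores lower
protos = np.eye(2)
print(dpnp_loss(protos[0], protos[0], protos, 0) < dpnp_loss(protos[1], protos[1], protos, 0))  # True
```

In the actual framework the prototypes double as classifier weights and are updated only from clean data, while gradients of a loss of this shape train the feature extractor on both branches.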
- North America > Canada > Ontario > Toronto (0.14)
- Asia > Middle East > Iran > Razavi Khorasan Province > Mashhad (0.04)
- Information Technology > Security & Privacy (0.67)
- Government > Military (0.49)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.68)
Designing Adaptive Algorithms Based on Reinforcement Learning for Dynamic Optimization of Sliding Window Size in Multi-Dimensional Data Streams
Zarghani, Abolfazl, Abedi, Sadegh
Multi-dimensional data streams, prevalent in applications like IoT, financial markets, and real-time analytics, pose significant challenges due to their high velocity, unbounded nature, and complex inter-dimensional dependencies. Sliding window techniques are critical for processing such streams, but fixed-size windows struggle to adapt to dynamic changes like concept drift or bursty patterns. This paper proposes a novel reinforcement learning (RL)-based approach to dynamically optimize sliding window sizes for multi-dimensional data streams. By formulating window size selection as an RL problem, we enable an agent to learn an adaptive policy based on stream characteristics, such as variance, correlations, and temporal trends. Our method, RL-Window, leverages a Dueling Deep Q-Network (DQN) with prioritized experience replay to handle non-stationarity and high-dimensionality. Evaluations on benchmark datasets (UCI HAR, PAMAP2, Yahoo! Finance Stream) demonstrate that RL-Window outperforms state-of-the-art methods like ADWIN and CNN-Adaptive in classification accuracy, drift robustness, and computational efficiency. Additional qualitative analyses, extended metrics (e.g., energy efficiency, latency), and a comprehensive dataset characterization further highlight its adaptability and stability, making it suitable for real-time applications.
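The RL formulation can be sketched with a tabular Q-learning stand-in for the paper's Dueling DQN: states summarize stream statistics, actions pick a window size, and a toy reward trades in-window variance against window cost (the window set, states, and reward are illustrative assumptions, not RL-Window's design):

```python
import random
import numpy as np

WINDOWS = [16, 32, 64, 128]   # candidate sliding-window sizes (the actions)

def variance_state(chunk, n_bins=3):
    """Discretize recent stream variance into a coarse state id (illustrative)."""
    v = float(np.var(chunk))
    return min(int(v * n_bins), n_bins - 1)

def q_learning_window(stream, episodes=200, alpha=0.1, gamma=0.9, eps=0.2):
    """Epsilon-greedy tabular Q-learning over (variance state, window size)."""
    Q = np.zeros((3, len(WINDOWS)))
    rng = random.Random(0)
    s = 0
    for t in range(episodes):
        a = rng.randrange(len(WINDOWS)) if rng.random() < eps else int(Q[s].argmax())
        w = WINDOWS[a]
        chunk = stream[t % max(1, len(stream) - w):][:w]
        reward = -np.var(chunk) - 0.001 * w        # toy variance/cost trade-off
        s2 = variance_state(chunk)
        Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])
        s = s2
    return Q

Q = q_learning_window(np.random.default_rng(0).normal(size=1000))
print(Q.shape)  # (3, 4)
```

Replacing the table with a dueling value/advantage network and sampling transitions from a prioritized replay buffer yields the architecture the abstract describes.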
- Asia > Middle East > Iran > Tehran Province > Tehran (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > Greece > Central Macedonia > Thessaloniki (0.04)
- Asia > Middle East > Iran > Razavi Khorasan Province > Mashhad (0.04)
- Banking & Finance (0.70)
- Information Technology (0.68)
- Health & Medicine (0.68)
BioPars: A Pretrained Biomedical Large Language Model for Persian Biomedical Text Mining
Merzah, Baqer M., Taami, Tania, Asoudeh, Salman, Mirzaee, Saeed, Hosseinpour, Amir Reza, Bengari, Amir Ali
Large Language Models (LLMs) have recently gained attention in the life sciences due to their capacity to model, extract, and apply complex biological information. Beyond their classical use as chatbots, these systems are increasingly used for complex analysis and problem-solving in specialized fields, including bioinformatics. We first introduce BIOPARS-BENCH, a dataset drawn from over 10,000 scientific articles, textbooks, and medical websites. We also introduce BioParsQA, consisting of 5,231 Persian medical question-answer pairs, to evaluate the proposed model. This study then introduces BioPars, a simple but accurate measure designed to assess LLMs on three main abilities: acquiring subject-specific knowledge, interpreting and synthesizing such knowledge, and demonstrating proper evidence. Comparing ChatGPT, Llama, and Galactica, our study highlights their ability to remember and retrieve learned knowledge but also reveals shortcomings in addressing higher-level, real-world questions and fine-grained inferences. These findings indicate the need for further fine-tuning to improve LLM capabilities in bioinformatics tasks. To our knowledge, BioPars is the first application of LLMs to Persian medical QA, especially for generating long answers. Evaluation on four selected medical QA datasets shows that BioPars achieves remarkable results compared to competing approaches. On BioParsQA, the model achieved a ROUGE-L score of 29.99, an improvement of 1.0 over GPT-4, and a BERTScore of 90.87 with the MMR method. Its MoverScore and BLEURT values were also higher than those of the other three models, at MoverScore=60.43 and BLEURT=50.78. BioPars is an ongoing project, and all resources related to its development will be made available via the following GitHub repository: https://github.com/amirap80/BioPars.
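ROUGE-L, the headline metric above, scores the longest common subsequence (LCS) between candidate and reference; a minimal whitespace-tokenized sketch (real evaluations add stemming and language-specific normalization, especially for Persian text):

```python
def rouge_l(candidate, reference):
    """ROUGE-L F1 from the longest common subsequence of token lists."""
    c, r = candidate.split(), reference.split()
    # Dynamic-programming LCS table: dp[i][j] = LCS length of c[:i], r[:j]
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, cw in enumerate(c):
        for j, rw in enumerate(r):
            dp[i + 1][j + 1] = dp[i][j] + 1 if cw == rw else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)   # harmonic mean of LCS precision/recall

print(round(rouge_l("the cat sat", "the cat sat down"), 3))  # 0.857
```

Because LCS rewards in-order overlap without requiring contiguity, ROUGE-L suits long free-form answers better than fixed n-gram overlap.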
- North America > United States > Indiana (0.04)
- Asia > Middle East > Iraq > Najaf Governorate > Najaf (0.04)
- Asia > Middle East > Iran > Tehran Province > Tehran (0.04)
- Asia > Middle East > Iran > Razavi Khorasan Province > Mashhad (0.04)
Low-Cost Infrared Vision Systems for Improved Safety of Emergency Vehicle Operations Under Low-Visibility Conditions
Naddaf-Sh, M-Mahdi, Lee, Andrew, Yen, Kin, Amini, Eemon, Soltani, Iman
This study investigates the potential of infrared (IR) camera technology to enhance driver safety for emergency vehicles operating in low-visibility conditions, particularly at night and in dense fog. Such environments significantly increase the risk of collisions, especially for tow trucks and snowplows that must remain operational in challenging conditions. Conventional driver assistance systems often struggle under these conditions due to limited visibility. In contrast, IR cameras, which detect the thermal signatures of obstacles, offer a promising alternative. The evaluation combines controlled laboratory experiments, real-world field tests, and surveys of emergency vehicle operators. In addition to assessing detection performance, the study examines the feasibility of retrofitting existing Department of Transportation (DoT) fleets with cost-effective IR-based driver assistance systems. Results underscore the utility of IR technology in enhancing driver awareness and provide data-driven recommendations for scalable deployment across legacy emergency vehicle fleets.
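The core advantage of radiometric IR imagery can be illustrated by thresholding apparent temperature against ambient; the frame, threshold, and "pedestrian" patch below are synthetic illustrations, not the study's detection pipeline:

```python
import numpy as np

def detect_warm_regions(frame, ambient_c, delta_c=8.0):
    """Flag pixels whose apparent temperature (°C) exceeds ambient by delta_c.
    A toy stand-in for obstacle detection on a radiometric IR frame."""
    mask = frame > ambient_c + delta_c
    return mask, int(mask.sum())

# Synthetic 4x4 IR frame: a 30 °C "pedestrian" patch on a 10 °C road scene
frame = np.full((4, 4), 10.0)
frame[1:3, 1:3] = 30.0
mask, n_hot = detect_warm_regions(frame, ambient_c=10.0)
print(n_hot)  # 4
```

Thermal contrast of this kind persists in darkness and fog, which is why IR-based assistance can flag obstacles that visible-light systems miss.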
- North America > United States > California > Yolo County > Davis (0.15)
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.87)
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Vision (0.96)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
Adaptive Locally Linear Embedding
Goli, Ali, Alizadeh, Mahdieh, Yazdi, Hadi Sadoghi
Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran; Center of Excellence in Soft Computing and Intelligent Information Processing, Ferdowsi University of Mashhad, Mashhad, Iran. April 10, 2025.
Manifold learning techniques, such as Locally Linear Embedding (LLE), are designed to preserve the local neighborhood structures of high-dimensional data during dimensionality reduction. Traditional LLE employs Euclidean distance to define neighborhoods, which can struggle to capture the intrinsic geometric relationships within complex data. A novel approach, Adaptive Locally Linear Embedding (ALLE), is introduced to address this limitation by incorporating a dynamic, data-driven metric that enhances topological preservation. This method redefines the concept of proximity by focusing on topological neighborhood inclusion rather than fixed distances. By adapting the metric to the local structure of the data, it achieves superior neighborhood preservation, particularly for datasets with complex geometries and high-dimensional structures. Experimental results demonstrate that ALLE significantly improves the alignment between neighborhoods in the input and feature spaces, resulting in more accurate and topologically faithful embeddings.
Keywords: Manifold Learning, Adaptive Locally Linear Embedding, Dimensionality Reduction, Topological Preservation, Complex Geometries, High-Dimensional Data, Topological Neighborhood Inclusion, Intrinsic Geometric Relationships.
Locally Linear Embedding (LLE) is a prominent manifold learning technique designed to reduce the dimensionality of high-dimensional datasets while preserving their intrinsic geometric structure. Proposed by Roweis and Saul, LLE operates through a systematic process: identifying the K-nearest neighbors of each data point, calculating reconstruction weights that express each point as a linear combination of its neighbors, and ultimately generating a low-dimensional representation that retains local relationships [14]. However, LLE traditionally relies on fixed distance metrics, such as Euclidean distance, which may inadequately represent complex data distributions and fail to capture nuanced topological relationships. In response to these limitations, we introduce a novel approach termed Adaptive LLE (ALLE), which integrates a flexible, data-driven metric into the LLE framework.
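The reconstruction-weight step of classical LLE can be sketched for a single point: shift the K neighbors to the origin, form the local Gram matrix, and solve a regularized linear system under the sum-to-one constraint (the regularization constant is an illustrative choice):

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Reconstruction weights for one point from its K neighbors, as in
    classical LLE: minimize ||x - sum_j w_j n_j||^2 subject to sum_j w_j = 1."""
    Z = neighbors - x                                     # shift neighbors to the origin
    C = Z @ Z.T                                           # local covariance (K x K)
    C += reg * np.trace(C) * np.eye(len(neighbors))       # regularize when K > dim
    w = np.linalg.solve(C, np.ones(len(neighbors)))
    return w / w.sum()                                    # enforce sum-to-one

x = np.array([0.5, 0.0])
nbrs = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
w = lle_weights(x, nbrs)
print(np.allclose(w @ nbrs, x, atol=0.05))  # weights reconstruct x closely
```

ALLE keeps this weight computation but replaces the Euclidean neighbor selection with its adaptive, data-driven notion of proximity.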
- Asia > Middle East > Iran > Razavi Khorasan Province > Mashhad (0.44)
- North America > United States > New York (0.04)
- North America > Canada > Ontario > Waterloo Region > Waterloo (0.04)
- Asia > Middle East > Jordan (0.04)
- Research Report > New Finding (0.88)
- Research Report > Promising Solution (0.54)
Bayesian Semi-Parametric Spatial Dispersed Count Model for Precipitation Analysis
Nadifar, Mahsa, Bekker, Andriette, Arashi, Mohammad, Ramoelo, Abel
The appropriateness of the Poisson model is frequently challenged when examining spatial count data marked by unbalanced distributions, over-dispersion, or under-dispersion. Moreover, traditional parametric models may inadequately capture the relationships among variables when covariates display ambiguous functional forms or when spatial patterns are intricate and indeterminate. To tackle these issues, we propose an innovative Bayesian hierarchical modeling system. This method combines non-parametric techniques with an adapted dispersed count model based on renewal theory, facilitating the effective management of unequal dispersion, non-linear correlations, and complex geographic dependencies in count data. We illustrate the efficacy of our strategy by applying it to lung and bronchus cancer mortality data from Iowa, emphasizing environmental and demographic factors like ozone concentrations, PM2.5, green space, and asthma prevalence. Our analysis demonstrates considerable regional heterogeneity and non-linear relationships, providing important insights into the impact of environmental and health-related factors on cancer death rates. This application highlights the significance of our methodology in public health research, where precise modeling and forecasting are essential for guiding policy and intervention efforts. Additionally, we performed a simulation study to assess the resilience and accuracy of the suggested method, validating its superiority in managing dispersion and capturing intricate spatial patterns relative to conventional methods. The suggested framework presents a flexible and robust instrument for geographical count analysis, offering innovative insights for academics and practitioners in disciplines such as epidemiology, environmental science, and spatial statistics.
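A quick diagnostic motivating such models is the dispersion index (variance-to-mean ratio), which is approximately 1 under a Poisson model; the synthetic counts below are illustrative, not the Iowa data:

```python
import numpy as np

def dispersion_index(counts):
    """Variance-to-mean ratio: ≈1 for Poisson counts, >1 over-dispersed,
    <1 under-dispersed — a first check before choosing a count model."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(0)
poisson = rng.poisson(5.0, size=2000)
# Negative binomial with mean 5 and variance 17.5, i.e. clearly over-dispersed
overdispersed = rng.negative_binomial(2, 2 / 7, size=2000)
print(round(dispersion_index(poisson), 2), round(dispersion_index(overdispersed), 2))
```

When this ratio departs markedly from 1 across spatial units, Poisson likelihoods misstate uncertainty, which is the gap the renewal-theory dispersed count model above is built to close.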
- North America > United States > Iowa (0.25)
- North America > Canada > Alberta (0.14)
- Africa > South Africa > Gauteng > Pretoria (0.04)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Public Health (1.00)
- Information Technology > Modeling & Simulation (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.93)